Singleton-ness of this class is no longer necessary. We now carry around a single instance of the AsyncioExecutorPool in a ServerContext object, which is owned by the ServerMainLoop.
```diff
  agent_network_provider: AgentNetworkProvider,
- server_logging: AgentServerLogging):
+ server_logging: AgentServerLogging,
+ server_context: ServerContext):
```
I was about to plumb the AsyncioExecutorPool all over the place, but then realized that the pain of this plumbing is one of the things that led to the singleton-ness of the PoolProvider solution in the first place.
So now we pass around a ServerContext object which contains global-ish server state, ultimately owned by ServerMainLoop. This should alleviate the plumbing burden. Much of what you will see below is (final?) plumbing for this.
Future: It would also make sense to package up the other aspects of this constructor into a ServiceContext object and plumb that for similar reasons. There is enough going on in here already.
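To illustrate the "Future" idea, here is a minimal sketch of what a ServiceContext might look like. The class and its getter names are hypothetical; only the collaborator types (ContextTypeLlmFactory, etc.) come from the diff below, and the bodies here are assumptions, not code from this PR.

```python
# Hypothetical sketch: bundling per-service collaborators into one object
# so constructors take a single ServiceContext argument instead of several.
class ServiceContext:
    """Bundles per-service collaborators behind simple getters."""

    def __init__(self, llm_factory, toolbox_factory, server_logging):
        self._llm_factory = llm_factory
        self._toolbox_factory = toolbox_factory
        self._server_logging = server_logging

    def get_llm_factory(self):
        return self._llm_factory

    def get_toolbox_factory(self):
        return self._toolbox_factory

    def get_server_logging(self):
        return self._server_logging
```

Constructors that today take three arguments would then take one, and tests could hand in a ServiceContext holding stubs.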
```diff
  self.llm_factory: ContextTypeLlmFactory = MasterLlmFactory.create_llm_factory(config)
  self.toolbox_factory: ContextTypeToolboxFactory = MasterToolboxFactory.create_toolbox_factory(config)
- self.async_executors_pool: AsyncioExecutorPool = AsyncioExecutorPoolProvider.get_executors_pool()
+ self.async_executor_pool: AsyncioExecutorPool = server_context.get_executor_pool()
```
Changed the variable name to singular to match the existing class name; the mismatch led to typos.
It also seems like we should consolidate the sync and async versions into an abstract base class at some point.
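The suggested consolidation could look roughly like this. AsyncioExecutorPool is real in this repo, but ExecutorPool, ThreadExecutorPool, and the method names below are purely illustrative assumptions about what a shared abstract base might expose.

```python
from abc import ABC, abstractmethod
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: one abstract base that both the sync and asyncio
# executor pool variants could implement.
class ExecutorPool(ABC):
    @abstractmethod
    def submit(self, fn, *args, **kwargs):
        """Schedule fn for execution and return a handle to its result."""

    @abstractmethod
    def shutdown(self):
        """Release the pool's resources."""

class ThreadExecutorPool(ExecutorPool):
    """Minimal synchronous variant backed by concurrent.futures."""

    def __init__(self, max_workers=4):
        self._pool = ThreadPoolExecutor(max_workers=max_workers)

    def submit(self, fn, *args, **kwargs):
        return self._pool.submit(fn, *args, **kwargs)

    def shutdown(self):
        self._pool.shutdown()
```

An asyncio-backed subclass would implement the same two methods, so callers (and the ServerContext getter) would only ever see the base type.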
```diff
  server_loop_callbacks: ServerLoopCallbacks,
- network_storage_dict: Dict[str, AgentNetworkStorage],
- server_status: ServerStatus,
+ server_context: ServerContext,
```
Only plumb the one object that contains both network_storage_dict and server_status.
```diff
- for network_storage in self.network_storage_dict.values():
+ network_storage_dict: Dict[str, AgentNetworkStorage] = self.server_context.get_network_storage_dict()
+ public_storage: AgentNetworkStorage = network_storage_dict.get("public")
+ for network_storage in network_storage_dict.values():
```
Use the objects on the ServerContext now.
```diff
  self.server_name = args.server_name
- self.server_status = ServerStatus(self.server_name)
+ server_status = ServerStatus(self.server_name)
+ self.server_context.set_server_status(server_status)
```
The ServerStatus object now needs to be created later and set on the new ServerContext object, because the server name only becomes available at that point.
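The ordering constraint can be sketched as follows. ServerContext and ServerStatus are real classes in this PR, but the minimal bodies below are stand-ins invented for the sketch, with just enough to show the create-early / set-later pattern.

```python
# Sketch of the ordering described above: the ServerContext exists first,
# while ServerStatus can only be built once the server name is parsed.
class ServerStatus:
    def __init__(self, server_name):
        self.server_name = server_name

class ServerContext:
    def __init__(self):
        self._server_status = None

    def set_server_status(self, server_status):
        self._server_status = server_status

    def get_server_status(self):
        return self._server_status

context = ServerContext()           # created early, before arg parsing
server_name = "agent-server"        # later: args.server_name arrives
context.set_server_status(ServerStatus(server_name))
```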
```diff
+ class ServerContext:
+     """
+     Class that contains global-ish state for each instance of a server.
+     """
```
Single object that contains server-wide global-ish state.
We do this instead of using singletons, which improves testability.
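A minimal sketch of the class, assuming getter/setter names that mirror the ones used in the diffs in this review (get_executor_pool, set_server_status, get_network_storage_dict); the bodies are assumptions, not the PR's actual implementation.

```python
# Sketch of a ServerContext: server-wide global-ish state owned by
# ServerMainLoop and passed to collaborators instead of using singletons.
class ServerContext:
    """
    Class that contains global-ish state for each instance of a server.
    """

    def __init__(self, executor_pool=None):
        self._executor_pool = executor_pool
        self._server_status = None          # set later, once the name is known
        self._network_storage_dict = {}

    def get_executor_pool(self):
        return self._executor_pool

    def set_server_status(self, server_status):
        self._server_status = server_status

    def get_server_status(self):
        return self._server_status

    def get_network_storage_dict(self):
        return self._network_storage_dict
```

Because the context is an ordinary constructor argument, a unit test can build one around fakes instead of monkey-patching a global.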
```diff
  from neuro_san.service.http.server.http_server import HttpServer
- from neuro_san.service.main_loop.server_status import ServerStatus
  from neuro_san.service.watcher.main_loop.storage_watcher import StorageWatcher
+ from neuro_san.service.utils.server_status import ServerStatus
```
ServerStatus (and ServiceStatus, too) moves to service.utils to avoid a dependency tangle.
```diff
  metadata_str: str = " ".join(sorted(metadata_set))

- AsyncioExecutorPoolProvider.set_executors_pool(reuse_mode=True)
+ server_status: ServerStatus = self.server_context.get_server_status()
```
We no longer need the PoolProvider, but we do need to futz with the ServerStatus from here on down, so get that guy from the ServerContext.
```diff
  manifest_path: str,
- manifest_update_period_in_seconds: int,
- server_status: ServerStatus):
+ server_context: ServerContext):
```
Plumb the ServerContext to the StorageWatcher as well, so the other two deleted args can come from there.
This ServerContext guy now gets plumbed up and down the service area and is eventually made available to the generic AgentService and the StorageWatcher. The main thrust here is to avoid singletons for testability, and to keep plumbing from being an impediment to a proper object-ownership tree. (Note: We should probably do something similar for a ServiceContext object, but there's enough going on in this PR already.)
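The testability claim above can be illustrated with a toy unit test. StubContext and WatcherUnderTest are invented for this sketch; the point is only that a plumbed context can be replaced by a stub, which a singleton cannot without monkey-patching.

```python
# Illustrative test: with the ServerContext passed in as a constructor
# argument, a test supplies a stub context holding fake storage.
class StubContext:
    def __init__(self, network_storage_dict):
        self._network_storage_dict = network_storage_dict

    def get_network_storage_dict(self):
        return self._network_storage_dict

class WatcherUnderTest:
    """Stand-in for a StorageWatcher-like consumer of the context."""

    def __init__(self, server_context):
        self.server_context = server_context

    def count_networks(self):
        return len(self.server_context.get_network_storage_dict())

watcher = WatcherUnderTest(StubContext({"public": object()}))
assert watcher.count_networks() == 1
```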
Tested: Chicken scenario on http and grpc